537 research outputs found

    The implementation research institute: Training mental health implementation researchers in the United States

    Abstract Background The Implementation Research Institute (IRI) provides two years of training in mental health implementation science for 10 new fellows each year. The IRI is supported by a National Institute of Mental Health (NIMH) R25 grant and the Department of Veterans Affairs (VA). Fellows attend two annual week-long trainings at Washington University in St. Louis. Training is provided through a rigorous curriculum, local and national mentoring, a 'learning site visit' to a federally funded implementation research project, pilot research, and grant writing. Methods This paper describes the rationale, components, outcomes to date, and participant experiences with IRI. Results IRI outcomes include 31 newly trained implementation researchers, their new grant proposals, contributions to other national dissemination and implementation research training, and publications in implementation science authored by the Core Faculty and fellows. Former fellows have obtained independent research funding in implementation science and are beginning to serve as mentors for more junior investigators. Conclusions Based on the number of implementation research grant proposals and papers produced by fellows to date, the IRI is proving successful in preparing new researchers who can inform the process of making evidence-based mental healthcare more available through real-world settings of care and who are advancing the field of implementation science.

    Organizational Readiness in Specialty Mental Health Care

    Implementing quality improvement efforts in clinics is challenging. Assessment of organizational "readiness" for change can set the stage for implementation by providing information regarding existing strengths and deficiencies, thereby increasing the chance of a successful improvement effort. This paper discusses organizational assessment in specialty mental health, in preparation for improving care for individuals with schizophrenia. The objective was to assess organizational readiness for change in specialty mental health in order to facilitate locally tailored implementation strategies. EQUIP-2 is a site-level controlled trial at nine VA medical centers (four intervention, five control). Providers at all sites completed an organizational readiness for change (ORC) measure, and key stakeholders at the intervention sites completed a semi-structured interview at baseline. At the four intervention sites, 16 administrators and 43 clinical staff completed the ORC, and 38 key stakeholders were interviewed. The readiness domains of training needs, communication, and change had the lower mean scores (i.e., potential deficiencies), ranging from a low of 23.8 to a high of 36.2 on a scale of 10–50, while the staff attributes of growth and adaptability had higher mean scores (i.e., potential strengths), ranging from a low of 35.4 to a high of 41.1. Semi-structured interviews revealed that staff perceptions and experiences of change and decision-making are affected by larger structural factors such as change mandates from VA headquarters. Motivation for change, organizational climate, staff perceptions and beliefs, and prior experience with change efforts contribute to readiness for change in specialty mental health. Sites with less readiness for change may require more flexibility in the implementation of a quality improvement intervention. We suggest that uptake of evidence-based practices can be enhanced by tailoring implementation efforts to the strengths and deficiencies of the organizations that are implementing quality improvement changes.
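    As a rough sketch only: the snippet below shows how domain-level readiness scores like those reported above might be computed from respondent-level values and screened against a cutoff to flag potential deficiencies. The domain names, response values, and the cutoff of 30 are invented for illustration and are not the actual ORC instrument or its scoring rules.

```python
# Hypothetical sketch: aggregate respondent-level readiness scores into
# domain means and flag low-scoring domains as potential deficiencies.
# Domain names and values are invented; scores are assumed to already be
# on the 10-50 domain scale reported in the study.
from statistics import mean

responses = {
    "training_needs": [[24, 22, 26], [25, 23, 21]],  # one list per respondent
    "communication":  [[30, 28, 33], [29, 31, 27]],
    "growth":         [[40, 42, 38], [41, 39, 43]],
}

DEFICIENCY_CUTOFF = 30  # illustrative threshold, not from the ORC manual

for domain, respondents in responses.items():
    domain_mean = mean(mean(scores) for scores in respondents)
    status = "potential deficiency" if domain_mean < DEFICIENCY_CUTOFF else "potential strength"
    print(f"{domain}: {domain_mean:.1f} ({status})")
```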

    A mixed methods multiple case study of implementation as usual in children's social service organizations: study protocol

    Background Improving quality in children's mental health and social service settings will require implementation strategies capable of moving effective treatments and other innovations (e.g., assessment tools) into routine care. It is likely that efforts to identify, develop, and refine implementation strategies will be more successful if they are informed by relevant stakeholders and are responsive to the strengths and limitations of the contexts and implementation processes identified in usual care settings. This study will describe: the types of implementation strategies used; how organizational leaders make decisions about what to implement and how to approach the implementation process; organizational stakeholders' perceptions of different implementation strategies; and the potential influence of organizational culture and climate on implementation strategy selection, implementation decision-making, and stakeholders' perceptions of implementation strategies. Methods/design This study is a mixed methods multiple case study of seven children's social service organizations in one Midwestern city in the United States that compose the control group of a larger randomized controlled trial. Qualitative data will include semi-structured interviews with organizational leaders (e.g., CEOs/directors, clinical directors, program managers) and a review of documents (e.g., implementation and quality improvement plans, program manuals, etc.) that will shed light on implementation decision-making and specific implementation strategies that are used to implement new programs and practices. Additionally, focus groups with clinicians will explore their perceptions of a range of implementation strategies. This qualitative work will inform the development of a Web-based survey that will assess the perceived effectiveness, relative importance, acceptability, feasibility, and appropriateness of implementation strategies from the perspective of both clinicians and organizational leaders. Finally, the Organizational Social Context measure will be used to assess organizational culture and climate. Qualitative, quantitative, and mixed methods data will be analyzed and interpreted at the case level as well as across cases in order to highlight meaningful similarities, differences, and site-specific experiences. Discussion This study is designed to inform efforts to develop more effective implementation strategies by fully describing the implementation experiences of a sample of community-based organizations that provide mental health services to youth in one Midwestern city.

    Advancing a Conceptual Model of Evidence-Based Practice Implementation in Public Service Sectors

    Implementation science is a quickly growing discipline. Lessons learned from business and medical settings are being applied, but it is unclear how well they translate to settings with different historical origins and customs (e.g., public mental health, social service, alcohol/drug sectors). The purpose of this paper is to propose a multi-level, four-phase model of the implementation process (i.e., Exploration, Adoption/Preparation, Implementation, Sustainment), derived from the extant literature, and to apply it to public sector services. We highlight features of the model likely to be particularly important in each phase, while considering the outer and inner contexts (i.e., levels) of public sector service systems.

    Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science

    Abstract Background Many interventions found to be effective in health services research studies fail to translate into meaningful patient care outcomes across multiple contexts. Health services researchers recognize the need to evaluate not only summative outcomes but also formative outcomes to assess the extent to which implementation is effective in a specific setting, prolongs sustainability, and promotes dissemination into other settings. Many implementation theories have been published to help promote effective implementation. However, they overlap considerably in the constructs included in individual theories, and a comparison of theories reveals that each is missing important constructs included in other theories. In addition, terminology and definitions are not consistent across theories. We describe the Consolidated Framework for Implementation Research (CFIR) that offers an overarching typology to promote implementation theory development and verification about what works where and why across multiple contexts. Methods We used a snowball sampling approach to identify published theories that were evaluated to identify constructs based on strength of conceptual or empirical support for influence on implementation, consistency in definitions, alignment with our own findings, and potential for measurement. We combined constructs across published theories that had different labels but were redundant or overlapping in definition, and we parsed apart constructs that conflated underlying concepts. Results The CFIR is composed of five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. Eight constructs were identified related to the intervention (e.g., evidence strength and quality), four constructs were identified related to outer setting (e.g., patient needs and resources), 12 constructs were identified related to inner setting (e.g., culture, leadership engagement), five constructs were identified related to individual characteristics, and eight constructs were identified related to process (e.g., plan, evaluate, and reflect). We present explicit definitions for each construct. Conclusion The CFIR provides a pragmatic structure for approaching complex, interacting, multi-level, and transient states of constructs in the real world by embracing, consolidating, and unifying key constructs from published implementation theories. It can be used to guide formative evaluations and build the implementation knowledge base across multiple studies and settings.

    Advancing the argument for validity of the Alberta Context Tool with healthcare aides in residential long-term care

    Abstract Background Organizational context has the potential to influence the use of new knowledge. However, despite advances in understanding the theoretical base of organizational context, its measurement has not been adequately addressed, limiting our ability to quantify and assess context in healthcare settings and thus advance the development of contextual interventions to improve patient care. We developed the Alberta Context Tool (the ACT) to address this concern. It consists of 58 items representing 10 modifiable contextual concepts. We reported the initial validation of the ACT in 2009. This paper presents the second stage of the psychometric validation of the ACT. Methods We used the Standards for Educational and Psychological Testing to frame our validity assessment. Data from 645 English-speaking healthcare aides from 25 urban residential long-term care facilities (nursing homes) in the three Canadian Prairie Provinces were used for this stage of validation. In this stage we focused on: (1) advanced aspects of internal structure (e.g., confirmatory factor analysis) and (2) relations with other variables as validity evidence. To assess the reliability and validity of scores obtained using the ACT we conducted: Cronbach's alpha, confirmatory factor analysis, analysis of variance, and tests of association. We also assessed the performance of the ACT when individual responses were aggregated to the care unit level, because the instrument was developed to obtain unit-level scores of context. Results Item-total correlations exceeded acceptable standards (> 0.3) for the majority of items (51 of 58). We ran three confirmatory factor models. Model 1 (all ACT items) displayed unacceptable fit overall and for five specific items (1 item on adequate space for resident care in the Organizational Slack-Space ACT concept and 4 items on use of electronic resources in the Structural and Electronic Resources ACT concept). This prompted specification of two additional models. Model 2 used the 7 scaled ACT concepts, while Model 3 used the 3 count-based ACT concepts. Both models displayed substantially improved fit in comparison to Model 1. Cronbach's alpha for the 10 ACT concepts ranged from 0.37 to 0.92, with 2 concepts performing below the commonly accepted standard of 0.70. Bivariate associations between the ACT concepts and instrumental research utilization levels (which the ACT should predict) were statistically significant at the 5% level for 8 of the 10 ACT concepts. The majority (8/10) of the ACT concepts also showed a statistically significant trend of increasing mean scores when arrayed across the lowest to the highest levels of instrumental research use. Conclusions The validation process in this study demonstrated additional empirical support for the construct validity of the ACT when completed by healthcare aides in nursing homes. The overall pattern of the data was consistent with the structure hypothesized in the development of the ACT and supports the ACT as an appropriate measure for assessing organizational context in nursing homes. Caution should be applied in using the one space and four electronic resource items that displayed misfit in this study with healthcare aides until further assessments are made.
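    As a rough illustration of the reliability checks mentioned above (not the ACT scoring code itself), the sketch below computes Cronbach's alpha and corrected item-total correlations for a single concept's items using invented response data. The 0.70 alpha standard and > 0.3 item-total cutoff referenced in the comments are the conventional benchmarks cited in the abstract.

```python
# Illustrative sketch: Cronbach's alpha and corrected item-total correlations
# for one hypothetical concept's items (respondents x items matrix).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the remaining items."""
    corrs = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        corrs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(corrs)

# Invented responses on a 5-point scale for a single concept's four items.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))
data = np.clip(base + rng.integers(-1, 2, size=(100, 4)), 1, 5).astype(float)

print(f"Cronbach's alpha: {cronbach_alpha(data):.2f}")  # compare to the 0.70 standard
print("Corrected item-total correlations:", np.round(corrected_item_total(data), 2))  # compare to 0.3
```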